236 research outputs found

    Semantic Web for Everyone: Exploring Semantic Web Knowledge Bases via Contextual Tag Clouds and Linguistic Interpretations

    The amount of Semantic Web data is huge and still growing rapidly. However, most users are unable to use a Semantic Web Knowledge Base (KB) as effectively as desired, due to a lack of relevant background knowledge. Furthermore, the data is usually heterogeneous, incomplete, and even erroneous, which further impairs understanding of the dataset. How to quickly familiarize users with the ontology and data in a KB is an important research challenge for the Semantic Web community.

    The core part of our proposed resolution to this problem is the contextual tag cloud system: a novel application that helps users explore a large-scale RDF (Resource Description Framework) dataset. The tags in our system are ontological terms (classes and properties), and a user can construct a context with a set of tags that defines a subset of instances. In the contextual tag cloud, the font size of each tag depends on the number of instances that are associated with both that tag and all tags in the context. Each contextual tag cloud thus serves as a summary of the distribution of relevant data, and by changing the context, the user can quickly gain an understanding of patterns in the data. Furthermore, the user can choose to include different RDFS entailment regimes in the calculation of tag sizes, thereby understanding the impact of semantics on the data. To resolve the key challenge of scalability, we combine a scalable preprocessing approach with a specially constructed inverted index and co-occurrence matrix, use three approaches to prune unnecessary counts for faster online computation, and design a paging and streaming interface. Via experimentation, we show how much our design choices benefit the responsiveness of the system.
    We conducted a preliminary user study of this system and found that novice participants felt the system provided a good means to investigate the data, and that they were able to complete assigned tasks more easily than with a baseline interface.

    We then extend the definition of tags to more general categories, in particular property values, chained property values, and functions on these values. With a quite different scenario and more general tags, we find the system can be used to discover interesting patterns in the value space. To adapt to this different dataset, we modify the infrastructure with a new indexing data structure and propose two strategies for online queries, chosen per request, in order to maintain the responsiveness of the system.

    In addition, we consider other approaches to help users locate classes via natural-language input. Using an external lexicon, Word Sense Disambiguation (WSD) on the label words of classes is one way to understand these classes. We propose a novel WSD approach based on our probability model, decompose the problem formulation into small computable pieces, and propose ways to estimate the values of these pieces. In the other approach, instead of relying on external sources, we investigate how to retrieve query-relevant classes using the annotations of instances associated with classes in the knowledge base. We propose a general framework for this approach, consisting of two phases: the keyword query is first used to locate relevant instances; we then induce the classes from this list of weighted matched instances.

    Following the description of the accomplished work, I propose some important future work for extending the current system, and finally conclude the dissertation.
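The counting scheme behind the contextual tag cloud can be sketched in a few lines. This is a hypothetical miniature, not the dissertation's implementation: the real system relies on a precomputed co-occurrence matrix, count pruning, and paging to stay responsive at RDF scale, while this toy version simply intersects inverted-index postings.

```python
from math import log

# Hypothetical miniature inverted index: ontological tag -> instance IDs.
inverted_index = {
    "Person":     {1, 2, 3, 4},
    "Athlete":    {2, 3},
    "Musician":   {1, 4},
    "birthPlace": {1, 2, 3},
}

def contextual_counts(index, context):
    """For every tag, count instances carrying that tag AND all context tags."""
    # Instances matching the whole context (empty context = all instances).
    matching = set().union(*index.values())
    for tag in context:
        matching &= index[tag]
    return {tag: len(matching & posting) for tag, posting in index.items()}

def font_size(count, base=10, scale=8):
    """Map a co-occurrence count to a font size (log scale; constants are made up)."""
    return base + scale * log(1 + count)

counts = contextual_counts(inverted_index, context={"Person", "Athlete"})
# "birthPlace" co-occurs with the context {Person, Athlete} on instances {2, 3},
# so it stays prominent; "Musician" shrinks to nothing in this context.
```

Changing the context set re-runs the intersection, which is exactly why the thesis needs pruning and precomputation: a naive intersection per tag does not scale to billions of triples.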

    Stream privacy amplification for quantum cryptography

    Privacy amplification is the key step to guarantee the security of quantum communication. Existing security proofs require accumulating a large number of raw key bits for privacy amplification. This is similar to block ciphers in classical cryptography, and it delays final key generation, since an entire block must be accumulated before privacy amplification. Moreover, any leftover errors after information reconciliation would corrupt the entire block. By modifying the security proof based on quantum error correction, we develop a stream privacy amplification scheme, which resembles a classical stream cipher. This scheme can output the final key in a streaming fashion and prevent errors from spreading, and hence allows privacy amplification to be placed before information reconciliation. The stream scheme can also help enhance the security of trusted-relay quantum networks. Inspired by the connection between stream ciphers and quantum error correction in our security analysis, we further develop a generic information-theoretic tool to study the security of classical encryption algorithms.
    Comment: 21 pages, 8 figures
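The error-containment property can be illustrated with a toy universal-hashing sketch. Toeplitz hashing is a standard primitive for privacy amplification, but the parameters and block sizes here are illustrative, not the paper's construction: hashing the raw key in small independent blocks confines a leftover reconciliation error to one sub-block of the output, whereas a single hash over the accumulated block lets that error touch every output bit.

```python
import random

def toeplitz_hash(bits, seed, out_len):
    """Hash `bits` with a binary Toeplitz matrix defined by `seed`
    (length len(bits) + out_len - 1); arithmetic is over GF(2)."""
    n = len(bits)
    return [sum(seed[i - j + n - 1] & b for j, b in enumerate(bits)) % 2
            for i in range(out_len)]

rng = random.Random(7)
raw = [rng.randint(0, 1) for _ in range(64)]
corrupted = list(raw)
corrupted[5] ^= 1                      # one leftover reconciliation error

# Block mode: one hash over the whole accumulated 64-bit block.
seed = [rng.randint(0, 1) for _ in range(64 + 32 - 1)]
block_a = toeplitz_hash(raw, seed, 32)
block_b = toeplitz_hash(corrupted, seed, 32)

# Stream mode: hash each 16-bit sub-block independently, output on the fly.
sub_seed = [rng.randint(0, 1) for _ in range(16 + 8 - 1)]
stream_a = [toeplitz_hash(raw[k:k + 16], sub_seed, 8) for k in range(0, 64, 16)]
stream_b = [toeplitz_hash(corrupted[k:k + 16], sub_seed, 8) for k in range(0, 64, 16)]

# The single flipped bit can affect every bit of the block-mode output,
# but at most one sub-block of the stream-mode output.
bad_subblocks = sum(a != b for a, b in zip(stream_a, stream_b))
```

This is only the error-spreading intuition; the actual security argument in the paper rests on the quantum error correction proof, not on this classical picture.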

    Quantifying Coherence with Untrusted Devices

    Device-independent (DI) tests allow one to witness and quantify quantum features of a system, such as entanglement, without trusting the implementation devices. Although DI testing is a powerful tool in many quantum information tasks, it generally requires a nonlocal setting. Fundamentally, the superposition property of quantum states, quantified by coherence measures, is a distinct feature that separates quantum mechanics from classical theories. In the literature, the witnessing and quantification of coherence with trusted devices has been well studied. However, it remains open whether we can witness and quantify single-party coherence with untrusted devices, as it is not clear whether the concept of a DI test exists without a nonlocal setting. In this work, we study DI witnessing and quantification of coherence with untrusted devices. First, we prove a no-go theorem for a fully DI scenario, as well as for a semi-DI scenario employing a joint measurement with trusted ancillary states. We then propose a general prepare-and-measure semi-DI scheme for witnessing and quantifying the amount of coherence. We show how to quantify the relative entropy and the l_1 norm of single-party coherence with analytical and numerical methods. As coherence is a fundamental resource for tasks such as quantum random number generation and quantum key distribution, we expect our results may shed light on the design of new semi-DI quantum cryptographic schemes.
    Comment: 14 pages, 7 figures, comments are welcome
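Both coherence measures named in the abstract have simple closed forms when the density matrix rho is known: the l_1 norm of coherence is the sum of absolute values of the off-diagonal entries, and the relative entropy of coherence is C_r(rho) = S(Delta(rho)) - S(rho), where Delta fully dephases rho in the incoherent basis. A small numerical check in the ordinary trusted-device setting (the paper's semi-DI protocol is a different, measurement-based route to these quantities):

```python
import numpy as np

def l1_coherence(rho):
    """l_1 norm of coherence: sum of |rho_ij| over off-diagonal entries."""
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def relative_entropy_coherence(rho):
    """C_r(rho) = S(Delta(rho)) - S(rho), von Neumann entropies in bits."""
    def entropy(m):
        evals = np.linalg.eigvalsh(m)
        evals = evals[evals > 1e-12]          # drop numerically-zero eigenvalues
        return -np.sum(evals * np.log2(evals))
    dephased = np.diag(np.diag(rho))          # Delta: keep only the diagonal
    return entropy(dephased) - entropy(rho)

# Maximally coherent qubit state |+><+|: both measures equal 1.
plus = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
```

For |+><+| the dephased state is maximally mixed (entropy 1 bit) while the pure state has entropy 0, so C_r = 1, matching the l_1 value.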

    Compressive Spectrum Sensing in Cognitive IoT

    With the rise of new paradigms in wireless communications such as the Internet of Things (IoT), the current static frequency allocation policy faces the primary challenge of spectrum scarcity, which encourages IoT devices to acquire cognitive capabilities to access underutilised spectrum in the temporal and spatial dimensions. Wideband spectrum sensing is one of the key functions enabling dynamic spectrum access, but it entails a major implementation challenge in terms of sampling rate and computation cost, since the sampling rate of analog-to-digital converters (ADCs) must be at least twice the spectrum bandwidth by the Nyquist-Shannon sampling theorem. By exploiting the sparse nature of the wideband spectrum, sub-Nyquist sampling and sparse signal recovery have shown potential in handling these problems, and are directly related to compressive sensing (CS). To invoke sub-Nyquist wideband spectrum sensing in IoT, blind signal acquisition with low-complexity sparse recovery is desirable on compact IoT devices. Moreover, with cooperation among distributed IoT devices, the complexity of sampling and reconstruction can be further reduced with a performance guarantee.

    Specifically, an adaptively-regularized iterative reweighted least squares (AR-IRLS) reconstruction algorithm is proposed to speed up convergence of the reconstruction in fewer iterations. Furthermore, a low-complexity compressive spectrum sensing algorithm is proposed to reduce the computational complexity of each iteration of the IRLS-based reconstruction algorithm from cubic time to linear time. In addition, to transfer the computation burden from the IoT devices to the core network, a joint iterative reweighted sparse recovery scheme is proposed that exploits occupied-channel information from a geo-location database to reduce the complexity of signal reconstruction.

    Since numerous IoT devices access or release the spectrum randomly, the sparsity levels of wideband spectrum signals are varying and unknown. A blind CS-based sensing algorithm is proposed to enable local secondary users (SUs) to adaptively adjust the sensing time or sampling rate without knowledge of the spectral sparsity. Apart from signal reconstruction at the back-end, a distributed sub-Nyquist sensing scheme is proposed that utilizes surrounding IoT devices to jointly sample the spectrum based on multi-coset sampling theory, in which only the minimum number of low-rate ADCs on the IoT devices is required to form coset samplers, without prior knowledge of the number of occupied channels or the signal-to-noise ratios. The models of the proposed algorithms are derived and verified by numerical analyses, and tested on both real-world and simulated TV white space signals.
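As background for the IRLS family the thesis builds on, here is a minimal textbook IRLS solver for the l_1-minimization problem min ||x||_1 subject to Ax = b. The AR-IRLS algorithm, its linear-time variant, and the database-aided scheme are the thesis's contributions and are not reproduced here; the measurement matrix and sparse "spectrum" below are synthetic.

```python
import numpy as np

def irls_l1(A, b, iters=50):
    """Plain IRLS for min ||x||_1 s.t. Ax = b (not the thesis's AR-IRLS).
    Each step solves a weighted least-norm problem; the epsilon smoothing
    shrinks over iterations to approach the l_1 solution."""
    x = A.T @ np.linalg.solve(A @ A.T, b)      # least-norm initial guess
    eps = 1.0
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)           # reweighting: small entries get penalized
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)
        eps = max(eps / 2, 1e-8)
    return x

rng = np.random.default_rng(0)
n, m = 20, 12                                  # n channels, m sub-Nyquist measurements
x_true = np.zeros(n)
x_true[[3, 11]] = [1.5, -2.0]                  # 2-sparse spectrum occupancy
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
b = A @ x_true
x_rec = irls_l1(A, b)
```

Each iteration inverts an m-by-m system, which is the cubic cost per iteration that the thesis's low-complexity algorithm reduces to linear time.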

    Improved Real-time Post-Processing for quantum Random Number Generators

    Randomness extraction is a key problem in cryptography and theoretical computer science. With the recent rapid development of quantum cryptography, quantum-proof randomness extraction has also been widely studied, addressing security in the presence of a quantum adversary. In contrast with conventional quantum-proof randomness extractors, which characterize the input raw data as min-entropy sources, we find that the raw data generated by a large class of trusted-device quantum random number generators can be characterized as a so-called reverse block source. This fact enables us to design improved extractors. Specifically, we propose two novel quantum-proof randomness extractors for reverse block sources that realize real-time block-wise extraction. In comparison with general min-entropy randomness extractors, our designs achieve a significantly higher extraction speed and a longer output for the same seed length. In addition, they enjoy the property of online algorithms, processing the raw data on the fly without waiting for the entire input to be available. These features make our design a suitable choice for the real-time post-processing of practical quantum random number generators. Applying our extractors to the raw data of the fastest known quantum random number generator, we achieve a simulated extraction speed as high as 374 Gbps.
    Comment: 11 pages, 3 figures
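Block-wise Toeplitz hashing is the standard workhorse for this style of real-time extraction. The sketch below shows only the online, block-by-block structure with a reused seed; the paper's extractors for reverse block sources are more refined, and the seed and block lengths here are arbitrary toy values.

```python
import numpy as np

def toeplitz_extract(block, seed_col, seed_row):
    """Hash one raw block with a binary Toeplitz matrix (mod-2 matrix-vector
    product). The same seed is reused for every block, as universal hashing
    with a fixed random seed permits."""
    out_len, in_len = len(seed_col), len(block)
    # T[i, j] = seed_col[i - j] on/below the diagonal, seed_row[j - i] above it.
    T = np.array([[seed_col[i - j] if i >= j else seed_row[j - i]
                   for j in range(in_len)] for i in range(out_len)])
    return T @ block % 2

def stream_extractor(raw_blocks, seed_col, seed_row):
    """Online post-processing: yield extracted bits block by block, without
    accumulating the whole raw sequence first."""
    for block in raw_blocks:
        yield toeplitz_extract(block, seed_col, seed_row)

rng = np.random.default_rng(1)
blocks = [rng.integers(0, 2, size=16) for _ in range(4)]  # raw 16-bit blocks
seed_col = rng.integers(0, 2, size=8)                     # 8 output bits per block
seed_row = rng.integers(0, 2, size=16)
out = list(stream_extractor(blocks, seed_col, seed_row))
```

The extraction ratio per block (here 8 out of 16 bits) would in practice be set by the certified entropy of the source, which is exactly where the reverse-block-source characterization buys its advantage over a worst-case min-entropy bound.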